Nude Child Photos and Other ‘Toxic Content’ Are on Giphy, Researchers Say

The report was released Friday by L1ght, an Israel-based content monitoring startup focused on making the internet safer for children. L1ght shared a few examples of the toxic content on Giphy with Fortune, including a short clip that, though not sexually explicit, depicted an adult male seemingly assaulting a girl who appeared to be pre-adolescent against a backdrop of white supremacist symbols. L1ght also shared several other disturbing images hosted on Giphy, including another non-graphic depiction of sexual assault apparently lifted from a film.

L1ght co-founder Ron Porat characterized the shared examples as “very, very mild in the context of what we see” on Giphy. The non-explicit images are, Porat said, “the tip of the iceberg—the first breadcrumbs you see.”

L1ght claims that by following those breadcrumbs, its proprietary search tools and team of researchers unearthed a “seedy underbelly” of content on Giphy, including nude and sexually explicit images of children. These images are hidden from most users and even from Giphy’s moderation, but can, L1ght claims, be accessed using obscure search terms in public search engines. 

Giphy isn’t quite a household name, but it is a nearly omnipresent part of the fabric of online social media. The site hosts and plays looping, six-second clips in the GIF format (hence the name), often taken from films and television shows. These clips can be embedded both on web pages and in posts on platforms including Facebook, Twitter, iMessage, and Snapchat. Giphy also offers an animated ‘sticker’ format that is integrated with Gen Z-centric platforms including Snapchat, Instagram, TikTok, and Twitch. The company has high-profile investors including Lightspeed Venture Partners and was valued at $600 million in 2016, the last time the private company raised money, according to Crunchbase.

Content on Giphy is just one part of an accelerating epidemic of sexual images of children being spread online. Facebook Messenger was responsible for nearly two-thirds of the 18.4 million worldwide reports of child exploitation images made in 2018. In early 2019, it was discovered that Instagram was being used to share links to private troves of child sexual imagery hosted on Dropbox. L1ght cites problems not just on social media, but also in games popular among teens and children, such as Fortnite and Minecraft.

Fortune was able to independently confirm that, at the time of reporting, Giphy does host sexualized images of girls who appear to be under the legal age of consent. Those images can, as L1ght claims, be found on public search engines using hashtags that also lead to more explicit content elsewhere on the web.

In response to L1ght’s claims, Giphy stated in part that it employs “an extensive array of moderation protocols to ensure the safety of all publicly indexed content, and our Trust + Safety team leverages industry standard best practices (and beyond) so that anything that violates our Community Guidelines is removed,” and that “we take any reports we receive of inappropriate content seriously (public or private), and employ immediate action to remove content that violates our Community Guidelines upon discovery.”

Giphy acknowledged that the site “does not automatically moderate content set to ‘private,’” which Giphy says is “consistent with industry standards.” This content is also not indexed in Giphy’s own search tools. Though Giphy does not actively monitor private content, users are able to flag offensive content even in private accounts, and Giphy will remove violating content from private accounts after it is flagged.

Giphy also stated that “content that is set to private is prevented from being visible or indexed in Google or Bing via industry standard site settings that most search engines comply with.” 

But L1ght appears to have identified loopholes in these precautions, including the use of less-mainstream search engines which may not comply with those indexing standards.

Porat said that users can upload offensive content to Giphy, and then “you let the search engines index that under certain hashtags. Then you put [the account] on private mode, and Giphy itself will not index it. Giphy will be almost blind to that.”

When entered into Yandex, a Russia-based search engine, the search terms that L1ght highlighted did indeed point to more than a dozen sexualized images of girls appearing to be under the legal age originally hosted on Giphy. Some had been removed from the platform, supporting Giphy’s claims that the company is serious about moderating its content. But the images were archived by Yandex and still viewable. The same search terms also revealed sexualized and exploitative images of children and explicit material elsewhere on the open web.

Experts in online child exploitation often refer to sexualized but non-explicit images of children as “child erotica,” a category that can include images taken from mainstream sources such as catalogs or films. These images are often used both to groom future victims of child abuse and to enable perpetrators by normalizing the sexualization of children, according to Brian Levine, a computer science professor at the University of Massachusetts Amherst who frequently collaborates with law enforcement in pursuing and prosecuting child exploitation online.

“I’m not sure that [Giphy has] the right tools to identify these materials and remove it,” said L1ght CEO Zohar Levkovitz. Levkovitz was previously the founder of the ad-tech startup Amobee, which was acquired by SingTel in 2012.

Levkovitz described these tactics for hiding and linking content as part of a migration of exploitative material from the so-called “dark web” to public hosting services. “Today everything that was hidden in the dark web is indexed in plain sight, for everyone to see.”

L1ght also claims it has found information hidden within pictures posted to Giphy. The company says that information, placed using photo editing software and often invisible to a casual observer, includes hashtags, “secret callsigns,” and even instructions describing how to locate sexualized or exploitative images of children on Giphy and elsewhere. This, according to L1ght, turns Giphy into “an infrastructure that allows child abusers to promote their content with each other.” 

Giphy says it “monitors global trends to improve our ability to spot and identify hidden content in images that may not be immediately viewable by the human eye,” and immediately removes such content when it includes “blacklisted terms” or content that violates its policies.

The fostering of online communities around sexualized images of children through public platforms like Giphy may have particularly insidious and long-lasting impacts, according to Levine. Even non-explicit child images can “normalize the behavior among the perpetrators” of child abuse, and “can be used to egg each other on. When perpetrators get together to form a community, [they] train each other—where to find other images, how to evade detection, and how to groom [victims].”

Other search terms flagged by L1ght led to graphic images of violent self-harm and so-called “thinspo” content that promotes eating disorders, currently hosted on Giphy. Both of those categories of content are explicitly prohibited by Giphy’s terms of service.

Giphy has previously had problems with offensive content. In 2018, Snapchat and Instagram both briefly suspended Giphy stickers on their platforms because a GIF containing a racial slur made it through Giphy’s moderation process. After that incident, Snap Inc. reportedly worked with Giphy to revamp those moderation processes before reinstating the service.

Giphy’s problems balancing user privacy with child safety reflect a much larger and seemingly intractable challenge for internet content and communications services. Many platforms are considering strengthening their users’ privacy and allowing for more personal sharing, such as letting users create GIFs of their friends dancing to send within private groups on platforms like iMessage or Messenger. Facebook has spelled out plans to add end-to-end encryption to its platform, and the spread of privacy tools has been praised by advocates including Edward Snowden.

But, Levine said, policies like Giphy’s come with serious tradeoffs. “This is an example of the balance that we, in society, and these tech firms in particular, have to consider. They’re trying to provide privacy for users, but sometimes that privacy enables crime, including harm to children.”


